Prof. Terrence J. Sejnowski
Department of Biophysics
Johns Hopkins University
Baltimore, MD 21218
%terry@hopkins 301 338-8687
Dear Terry:
I found your ``Learning and Representation in Connectionist Models''
very thought-provoking, and here are some of the thoughts it provoked.
The following human ability should be taken into account whether your
model of reading aloud is regarded as AI or as brain theory. One
can be told that in the new Chinese Latin spelling the letter Q is
pronounced /ch/, and that in French the letter G before A is
pronounced /g/ and before E is pronounced like J. One can then use these
facts immediately in reading aloud. Therefore it seems unlikely that
the information needed for reading aloud is stored solely in connections.
One can imagine that new information is interpreted on the fly
if there isn't too much of it, and that this practice is used to train some
network. One can also imagine a more elaborate network that has rules
as additional inputs. In any case, the existence of the ability to be told,
in addition to the ability to be trained, requires discussion.
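To make the contrast concrete, here is a minimal sketch in Python, using a
hypothetical rule table keyed on a letter and the letter that follows it.
Everything in it beyond the three example rules is illustrative, not a claim
about how such a network would actually be built.

    # A minimal sketch of "being told" a spelling rule, as opposed to
    # storing it in trained connection weights. The rule format
    # (letter, following letter) -> phoneme is a hypothetical illustration.

    rules = {
        ("q", None): "ch",   # new Chinese Latin spelling: Q is pronounced "ch"
        ("g", "a"):  "g",    # French: G before A is hard
        ("g", "e"):  "zh",   # French: G before E is soft, like J
    }

    def read_aloud(word):
        """Apply explicitly stated rules immediately, with no training phase."""
        phonemes = []
        for i, letter in enumerate(word):
            nxt = word[i + 1] if i + 1 < len(word) else None
            phoneme = rules.get((letter, nxt)) or rules.get((letter, None)) or letter
            phonemes.append(phoneme)
        return "-".join(phonemes)

    # A newly told rule is usable on the very next word:
    rules[("c", "i")] = "ts"      # hypothetical: told that C before I is "ts"
    print(read_aloud("qi"))       # ch-i
    print(read_aloud("gare"))     # g-a-r-e

The point of the sketch is only that adding an entry to the table is being
told, whereas a network storing the same fact in its connections would have
to absorb it through further training.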
At the phonetic end it is more difficult for the untrained person
to be told about new sounds or conventions about how phonemes are
modified by their environments. Maybe this is only because we learn
phonetic skills at age two and orthography much later. Certainly general
linguistic training results in the ability to learn phonetic conventions
by being told or even from books. I remember being surprised that my
daughters found difficulty in learning the voiced-unvoiced distinction
for consonants. However, once it was brought to their attention, the
7-year-old and the 12-year-old learned it almost equally quickly.
Enclosed is some information about AAAI support of workshops.
I enjoyed our discussions in Paris.
Sincerely,